We consider the problem of detecting latent community information in mixed membership weighted networks, in which nodes have mixed memberships and the edges connecting nodes can be finite real numbers. We propose a general mixed membership distribution-free model for this problem. The model imposes no distributional constraints on the edge weights, only on their expected values, and can be viewed as a generalization of several previous models. We use an efficient spectral algorithm to estimate community memberships under the model. We also derive the convergence rate of the proposed algorithm using delicate spectral analysis. We demonstrate the advantages of the mixed membership distribution-free model with applications to small-scale simulated networks in which edges follow different distributions.
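The spectral estimation step can be illustrated with a minimal sketch (not the authors' exact algorithm; the block matrix `B`, community sizes, and helper names below are invented for this toy example): take the leading eigenvectors of the adjacency matrix, locate the simplex corners with a successive projection step, and recover memberships by projecting rows onto those corners.

```python
import numpy as np

def spa(X, K):
    """Successive Projection Algorithm: greedily pick K rows of X
    whose convex hull (approximately) contains all other rows."""
    R = X.copy().astype(float)
    idx = []
    for _ in range(K):
        j = int(np.argmax((R ** 2).sum(axis=1)))
        idx.append(j)
        u = R[j] / np.linalg.norm(R[j])
        R -= np.outer(R @ u, u)          # project out the chosen direction
    return idx

def mixed_memberships(A, K):
    """Estimate a row-membership matrix Pi (n x K, rows on the simplex)
    from a symmetric weighted adjacency matrix A."""
    vals, vecs = np.linalg.eigh(A)
    U = vecs[:, np.argsort(-np.abs(vals))[:K]]   # leading eigenspace
    corners = U[spa(U, K)]                       # rows of (near-)pure nodes
    Pi = np.maximum(U @ np.linalg.pinv(corners), 0)  # solve U ~ Pi @ corners
    return Pi / Pi.sum(axis=1, keepdims=True)

# toy example: 2 communities, all nodes pure, noiseless expected adjacency
Pi0 = np.vstack([np.tile([1.0, 0.0], (5, 1)), np.tile([0.0, 1.0], (5, 1))])
B = np.array([[2.0, 0.2], [0.2, 1.5]])           # real-valued block matrix
A = Pi0 @ B @ Pi0.T                              # expectation, no noise
Pi_hat = mixed_memberships(A, 2)
```

In this noiseless pure-node case, the recovered rows are one-hot up to a column permutation, which is the identifiability statement the simplex geometry relies on.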
translated by Google Translate
Community detection in unweighted networks has been widely studied in network analysis, but the case of weighted networks remains a challenge. In this paper, a Distribution-Free Model (DFM) is proposed for weighted networks in which nodes are partitioned into different communities. DFM is a general, interpretable, and identifiable model for both unweighted and weighted networks. The proposed model does not require prior knowledge of the specific distribution of the adjacency matrix's elements, only of their expected values. The distribution-free property of DFM even allows the adjacency matrix to have negative elements. We develop an efficient spectral algorithm to fit DFM. By introducing a noise matrix, we build a theoretical framework based on perturbation analysis to show that the proposed algorithm stably yields consistent community detection under DFM. Numerical experiments on synthetic networks and on two social networks from the literature are used to illustrate the algorithm.
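A generic spectral fitting procedure of the kind described can be sketched as follows (an illustration under assumptions, not the paper's exact algorithm: the toy block matrix and noise level are invented, and the k-means is a hand-rolled Lloyd iteration with deterministic farthest-point initialization). Note that the block matrix deliberately contains a negative expected weight, which a distribution-free formulation permits:

```python
import numpy as np

def kmeans(X, K, iters=50):
    """Plain Lloyd's algorithm with deterministic farthest-point init."""
    centers = [X[0]]
    for _ in range(K - 1):
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[int(np.argmax(d))])
    centers = np.array(centers)
    for _ in range(iters):
        labels = ((X[:, None, :] - centers[None]) ** 2).sum(axis=2).argmin(axis=1)
        for k in range(K):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
    return labels

def spectral_communities(A, K):
    """Cluster nodes of a symmetric weighted adjacency matrix using
    its K eigenvectors of largest absolute eigenvalue."""
    vals, vecs = np.linalg.eigh(A)
    U = vecs[:, np.argsort(-np.abs(vals))[:K]]
    return kmeans(U, K)

# toy weighted network: two 6-node blocks; weights are real and may be negative
rng = np.random.default_rng(1)
z = np.repeat([0, 1], 6)
B = np.array([[1.5, -0.5], [-0.5, 2.0]])   # expected edge weights
A = B[z][:, z] + 0.05 * rng.standard_normal((12, 12))
A = (A + A.T) / 2                          # keep the noisy matrix symmetric
labels = spectral_communities(A, 2)
```

Because only the expectation matters, the same pipeline runs unchanged whether the observed weights are Bernoulli, Gaussian, or signed.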
Consider a directed network with $K_{r}$ row communities and $K_{c}$ column communities. Previous works found that modeling directed networks in which all nodes have the overlapping property requires $K_{r}=K_{c}$ for identifiability. In this paper, we propose an overlapping and non-overlapping model (ONM) to study directed networks in which row nodes have the overlapping property while column nodes do not. The proposed model is identifiable when $K_{r}\leq K_{c}$. Meanwhile, we provide one identifiable model as an extension of ONM to model directed networks with variation in node degree. Two spectral algorithms with theoretical guarantees of consistent estimation are designed to fit the models. Small-scale numerical studies are used to illustrate the algorithms.
Directed networks appear in various areas, such as biology, sociology, physiology, and computer science. In this paper, we construct a spectral clustering method based on the singular value decomposition of the adjacency matrix to detect communities under the directed stochastic block model (DiSBM). By considering a sparsity parameter, under mild conditions we show that the proposed approach can consistently recover the hidden row and column communities for different scalings of degrees. By accounting for the degree heterogeneity of both row and column nodes, we further modify the proposed method, establish a theoretical framework for the directed degree-corrected stochastic block model (DiDCSBM), and show the consistency of the modified method in this case. Our theoretical results under DiSBM and DiDCSBM provide some innovations for certain special directed networks, such as directed networks with balanced clusters, directed networks whose nodes have similar degrees, and the directed Erdős–Rényi graph. Furthermore, our theoretical results under DiDCSBM are consistent with those under DiSBM when DiDCSBM degenerates to DiSBM.
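The SVD-based co-clustering idea can be sketched as follows (a simplified illustration, not the exact proposed method: the `degree_correct` row-normalization mimics the usual modification for degree heterogeneity, and the toy block matrix is invented; the k-means helper uses deterministic farthest-point initialization):

```python
import numpy as np

def kmeans(X, K, iters=50):
    """Lloyd's algorithm with deterministic farthest-point initialization."""
    centers = [X[0]]
    for _ in range(K - 1):
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[int(np.argmax(d))])
    centers = np.array(centers)
    for _ in range(iters):
        labels = ((X[:, None, :] - centers[None]) ** 2).sum(axis=2).argmin(axis=1)
        for k in range(K):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
    return labels

def cocluster(A, Kr, Kc, degree_correct=False):
    """Row communities from the leading left singular vectors of A,
    column communities from the leading right singular vectors."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    Ur, Vc = U[:, :Kr], Vt[:Kc].T
    if degree_correct:
        # row-normalization makes clustering insensitive to degree scaling
        Ur = Ur / np.maximum(np.linalg.norm(Ur, axis=1, keepdims=True), 1e-12)
        Vc = Vc / np.maximum(np.linalg.norm(Vc, axis=1, keepdims=True), 1e-12)
    return kmeans(Ur, Kr), kmeans(Vc, Kc)

# toy population adjacency of a directed network: 8 row nodes, 6 column nodes
zr = np.repeat([0, 1], 4)
zc = np.repeat([0, 1], 3)
B = np.array([[0.9, 0.2], [0.1, 0.7]])   # block connectivity matrix
Omega = B[zr][:, zc]                     # expected (population) adjacency
row_labels, col_labels = cocluster(Omega, 2, 2)
```

The asymmetry of the setting is visible here: row and column nodes are clustered separately, from different singular subspaces, and need not share the same community count.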
The mixed membership problem for undirected networks has been well studied in network analysis in recent years. However, the more general case of mixed membership for directed networks remains a challenge. Here, we propose an interpretable and identifiable model for directed mixed membership networks: the directed mixed membership stochastic block model (DiMMSB for short). DiMMSB allows the row nodes and the column nodes of the adjacency matrix to be different, and these nodes may have distinct community structures in a directed network. We also develop an efficient spectral algorithm called DiSP, designed based on the simplex structure inherent in the left and right singular vectors of the population adjacency matrix, to estimate the mixed memberships of both row and column nodes in a directed network. We show that DiSP is asymptotically consistent under mild conditions by providing error bounds for the inferred membership vector of each row node and each column node using delicate spectral analysis. We demonstrate the advantages of DiSP on simulated directed mixed membership networks, a directed political blog network, and a paper citation network.
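The simplex structure in the population singular vectors can be illustrated with a rough DiSP-style sketch (the SPA corner-finding step and the toy membership matrices below are illustrative assumptions, not necessarily the exact DiSP steps):

```python
import numpy as np

def spa(X, K):
    """Successive projection: pick K rows spanning the simplex corners."""
    R = X.astype(float).copy()
    idx = []
    for _ in range(K):
        j = int(np.argmax((R ** 2).sum(axis=1)))
        idx.append(j)
        u = R[j] / np.linalg.norm(R[j])
        R -= np.outer(R @ u, u)
    return idx

def disp_sketch(A, K):
    """Mixed memberships for row and column nodes from the K leading
    left/right singular vectors of a (population) adjacency matrix."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    out = []
    for M in (U[:, :K], Vt[:K].T):
        corners = M[spa(M, K)]                      # rows of pure nodes
        Pi = np.maximum(M @ np.linalg.pinv(corners), 0)
        out.append(Pi / Pi.sum(axis=1, keepdims=True))
    return out

# toy: all row/column nodes pure, so memberships are recovered exactly
Pr = np.repeat(np.eye(2), [4, 4], axis=0)   # 8 row nodes
Pc = np.repeat(np.eye(2), [3, 3], axis=0)   # 6 column nodes
B = np.array([[1.0, 0.3], [0.2, 0.8]])
Pi_r, Pi_c = disp_sketch(Pr @ B @ Pc.T, 2)
```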
Human parsing aims to partition humans in images or video into multiple pixel-level semantic parts. In the last decade, it has attracted significant interest in the computer vision community and has been utilized in a broad range of practical applications, from security monitoring, to social media, to visual special effects, just to name a few. Although deep learning-based human parsing solutions have made remarkable achievements, many important concepts, existing challenges, and potential research directions remain unclear. In this survey, we comprehensively review three core sub-tasks: single human parsing, multiple human parsing, and video human parsing, by introducing their respective task settings, background concepts, relevant problems and applications, representative literature, and datasets. We also present quantitative performance comparisons of the reviewed methods on benchmark datasets. Additionally, to promote sustainable development of the community, we put forward a transformer-based human parsing framework, providing a high-performance baseline for follow-up research through universal, concise, and extensible solutions. Finally, we point out a set of under-investigated open issues in this field and suggest new directions for future study. We also provide a regularly updated project page to continuously track recent developments in this fast-advancing field: https://github.com/soeaver/awesome-human-parsing.
Supervised Deep-Learning (DL)-based reconstruction algorithms have shown state-of-the-art results for highly-undersampled dynamic Magnetic Resonance Imaging (MRI) reconstruction. However, their need for large amounts of high-quality ground-truth data hinders their application due to the generalization problem. Recently, Implicit Neural Representation (INR) has appeared as a powerful DL-based tool for solving inverse problems by characterizing the attributes of a signal as a continuous function of the corresponding coordinates in an unsupervised manner. In this work, we propose an INR-based method to improve dynamic MRI reconstruction from highly undersampled k-space data, which takes only spatiotemporal coordinates as inputs. Specifically, the proposed INR represents the dynamic MRI images as an implicit function and encodes them into neural networks. The weights of the network are learned from the sparsely-acquired (k, t)-space data itself, without external training datasets or prior images. Benefiting from the strong implicit continuity regularization of INR together with explicit regularization for low-rankness and sparsity, our proposed method outperforms the compared scan-specific methods at various acceleration factors. For example, experiments on retrospective cardiac cine datasets show an improvement of 5.5 ~ 7.1 dB in PSNR for extremely high accelerations (up to 41.6-fold). The high quality and inner continuity of the images provided by INR have great potential to further improve the spatiotemporal resolution of dynamic MRI, without the need for any training data.
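The INR idea can be illustrated with a toy 1-D analogue (pure NumPy, not the authors' architecture: a small coordinate MLP with periodic activations is fitted so that a signal becomes a continuous function of its coordinate; the real method uses spatiotemporal inputs and k-space data consistency, and all sizes below are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# signal samples: intensity as a function of a 1-D coordinate t in [0, 1]
t = np.linspace(0, 1, 64)[:, None]
y = np.sin(2 * np.pi * t)

# tiny coordinate MLP: 1 -> 32 (sine activation) -> 1
W1 = rng.normal(0, 3.0, (1, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.1, (32, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.sin(x @ W1 + b1)          # periodic activation suits smooth signals
    return h, h @ W2 + b2

_, y0 = forward(t)
loss0 = np.mean((y0 - y) ** 2)       # error before fitting

lr = 1e-2
for _ in range(2000):                # plain gradient descent, manual backprop
    h, yhat = forward(t)
    g = 2 * (yhat - y) / len(t)      # dLoss/dyhat
    gW2 = h.T @ g; gb2 = g.sum(0)
    gz = (g @ W2.T) * np.cos(t @ W1 + b1)   # back through the sine
    gW1 = t.T @ gz; gb1 = gz.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

_, y1 = forward(t)
loss1 = np.mean((y1 - y) ** 2)       # error after fitting
```

Once trained, the network can be queried at any continuous coordinate, which is the property that motivates using INRs to raise spatiotemporal resolution.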
The counting task, which plays a fundamental role in numerous applications (e.g., crowd counting, traffic statistics), aims to predict the number of objects with various densities. Existing object counting tasks are designed for a single object class. However, it is inevitable to encounter newly arriving data with new classes in the real world. We name this scenario \textit{evolving object counting}. In this paper, we build the first evolving object counting dataset and propose a unified object counting network as a first attempt to address this task. The proposed model consists of two key components: a class-agnostic mask module and a class-increment module. The class-agnostic mask module learns a generic object-occupation prior by predicting a class-agnostic binary mask (e.g., 1 denotes that an object exists at the corresponding position in an image and 0 otherwise). The class-increment module handles newly arriving classes and provides discriminative class guidance for density map prediction. The combined outputs of the class-agnostic mask module and the image feature extractor are used to predict the final density map. When new classes arrive, we first add new neural nodes to the last regression and classification layers of this module. Then, instead of retraining the model from scratch, we use knowledge distillation to help the model remember what it has already learned about previous object classes. We also employ a support sample bank that stores a small number of typical training samples of each class, which are used to prevent the model from forgetting key information about old data. With this design, our model can efficiently and effectively adapt to new classes while maintaining good performance on already-seen data without large-scale retraining. Extensive experiments on the collected dataset demonstrate the favorable performance of our method.
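The class-increment mechanics described above (adding output nodes for new classes, then distilling old knowledge) can be sketched as follows (illustrative NumPy only; the function names, dimensions, and distillation temperature are assumptions, not the paper's implementation):

```python
import numpy as np

def expand_classifier(W, b, n_new):
    """Add output nodes for new classes, keeping old-class weights intact."""
    d, C = W.shape
    rng = np.random.default_rng(0)
    W_new = np.concatenate([W, 0.01 * rng.standard_normal((d, n_new))], axis=1)
    b_new = np.concatenate([b, np.zeros(n_new)])
    return W_new, b_new

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def distill_loss(new_logits, old_logits, T=2.0):
    """Cross-entropy between softened old-model and new-model predictions
    on the OLD classes only -- a standard knowledge-distillation term."""
    p_old = softmax(old_logits, T)
    p_new = softmax(new_logits, T)
    return -np.mean(np.sum(p_old * np.log(p_new + 1e-12), axis=1))

# toy: 5-dim features, 3 old classes, one new class arrives
rng = np.random.default_rng(1)
W, b = rng.standard_normal((5, 3)), np.zeros(3)
X = rng.standard_normal((4, 5))
old_logits = X @ W + b                  # "old model" predictions, kept as targets
W2, b2 = expand_classifier(W, b, 1)     # grow the last layer by one class
new_logits = X @ W2 + b2
```

Immediately after expansion the old-class logits are unchanged; during subsequent training, minimizing `distill_loss` keeps them close to the stored old-model outputs.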
Video Super-Resolution (VSR) aims to restore high-resolution (HR) videos from low-resolution (LR) videos. Existing VSR techniques usually recover HR frames by extracting pertinent textures from nearby frames with known degradation processes. Despite significant progress, grand challenges remain in effectively extracting and transmitting high-quality textures from heavily degraded low-quality sequences affected by blur, additive noise, and compression artifacts. In this work, a novel Frequency-Transformer (FTVSR) is proposed for handling low-quality videos, which carries out self-attention in a combined space-time-frequency domain. First, video frames are split into patches, and each patch is transformed into spectral maps in which each channel represents a frequency band. This permits fine-grained self-attention on each frequency band, so that real visual texture can be distinguished from artifacts. Second, a novel dual frequency attention (DFA) mechanism is proposed to capture both global and local frequency relations, which can handle the varied and complicated degradation processes of real-world scenarios. Third, we explore different self-attention schemes for video processing in the frequency domain and discover that a ``divided attention'', which conducts joint space-frequency attention before applying temporal-frequency attention, leads to the best video enhancement quality. Extensive experiments on three widely-used VSR datasets show that FTVSR outperforms state-of-the-art methods on different low-quality videos with clear visual margins. Code and pre-trained models are available at https://github.com/researchmm/FTVSR.
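The first step, turning patches into spectral maps whose channels are frequency bands, is essentially a patch-wise 2-D DCT and can be sketched as follows (the patch size and helper names are illustrative; the paper's actual tokenization may differ):

```python
import numpy as np

def dct_matrix(p):
    """Orthonormal DCT-II basis matrix of size p x p."""
    n = np.arange(p)
    D = np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * p))
    D[0] *= np.sqrt(1.0 / p)
    D[1:] *= np.sqrt(2.0 / p)
    return D

def patch_spectra(frame, p=4):
    """Split an (H, W) frame into p x p patches and DCT-transform each,
    producing an (H//p, W//p, p*p) map whose channels are frequency bands."""
    H, W = frame.shape
    D = dct_matrix(p)
    patches = frame.reshape(H // p, p, W // p, p).transpose(0, 2, 1, 3)
    spectra = D @ patches @ D.T        # 2-D DCT applied to every patch
    return spectra.reshape(H // p, W // p, p * p)

rng = np.random.default_rng(0)
frame = rng.standard_normal((8, 8))    # stand-in for one video frame
spec = patch_spectra(frame)
```

Channel 0 holds each patch's DC (mean) content, while later channels hold progressively higher frequencies; attending per channel is what lets the model separate real texture from compression artifacts.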
In this paper, we consider an intelligent reflecting surface (IRS)-aided cell-free massive multiple-input multiple-output system, where the beamforming at the access points and the phase shifts at the IRSs are jointly optimized to maximize energy efficiency (EE). To solve the EE maximization problem, we propose an iterative optimization algorithm that uses the quadratic transform and the Lagrangian dual transform to find the optimal beamforming and phase shifts. However, the proposed algorithm suffers from high computational complexity, which hinders its application in some practical scenarios. In response, we further propose a deep learning based approach for joint beamforming and phase-shift design. Specifically, a two-stage deep neural network is trained offline in an unsupervised manner and then deployed online to predict the beamforming and phase shifts. Simulation results show that, compared with the iterative optimization algorithm and the genetic algorithm, the unsupervised learning based approach achieves higher EE performance with lower running time.
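The quadratic transform can be illustrated on a scalar energy-efficiency ratio (a toy single-link example with invented parameters, not the paper's matrix-valued beamforming problem): the ratio rate/power is replaced by a concave surrogate with an auxiliary variable y that is updated in closed form between surrogate maximizations.

```python
import numpy as np

g, Pc, Pmax = 4.0, 1.0, 10.0          # channel gain, circuit power, power budget
p_grid = np.linspace(1e-6, Pmax, 2001)
rate = np.log2(1 + g * p_grid)        # achievable rate at each transmit power
power = p_grid + Pc                   # total power consumption

# quadratic transform for the ratio rate/power:
# maximize 2*y*sqrt(rate) - y^2*power, alternating with the y update
p = Pmax
for _ in range(30):
    y = np.sqrt(np.log2(1 + g * p)) / (p + Pc)      # closed-form auxiliary update
    p = p_grid[np.argmax(2 * y * np.sqrt(rate) - y ** 2 * power)]

ee_qt = np.log2(1 + g * p) / (p + Pc)  # EE at the converged power
ee_direct = np.max(rate / power)       # brute-force reference on the same grid
```

Each iteration provably does not decrease the ratio, so the scheme climbs to the energy-efficiency optimum; in the paper's setting the same transform is applied to multi-antenna SINR ratios rather than this scalar toy.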